
    Automatic Segmentation of the Mandible for Three-Dimensional Virtual Surgical Planning

    Three-dimensional (3D) medical imaging techniques play a fundamental role in the field of oral and maxillofacial surgery (OMFS). 3D images are used to guide diagnosis, assess the severity of disease, and support pre-operative planning, intra-operative guidance, and virtual surgical planning (VSP). In the field of oral cancer, where surgical resection requiring partial removal of the mandible is a common treatment, resection surgery is often based on 3D VSP to accurately design a resection plan around the tumor margins. In orthognathic surgery and dental implant surgery, 3D VSP is also extensively used to precisely guide mandibular surgery. Image segmentation of head and neck radiographic images, the process of creating a 3D volume of the target tissue, is a useful tool to visualize the mandible and quantify geometric parameters. Studies have shown that 3D VSP requires accurate segmentation of the mandible, which is currently performed manually by medical technicians, a time-consuming and poorly reproducible process. This thesis presents four algorithms for mandible segmentation from CT and CBCT scans and contributes novel ideas toward the development of automatic mandible segmentation for 3D VSP. We implement the segmentation approaches on head and neck CT/CBCT datasets and then evaluate their performance. Experimental results show that our proposed approaches for mandible segmentation in CT/CBCT datasets exhibit high accuracy.

    Association between the quality of plant-based diets and periodontitis in the U.S. general population

    Aim: To investigate the relationship between plant-based diet indices (PDIs) and periodontitis and serum IgG antibodies against periodontopathogens in the U.S. population. Materials and Methods: We analysed cross-sectional data on 5651 participants ≥40 years of age from the Third National Health and Nutrition Examination Survey. Food frequency questionnaire data were used to calculate the overall PDI, healthy plant-based diet index (hPDI), and unhealthy plant-based diet index (uPDI). Periodontitis was defined using a half-reduced Centers for Disease Control and Prevention and American Academy of Periodontology case definition. Serum antibodies against 19 periodontopathogens were used to classify the population into two subgroups using hierarchical clustering. Survey-weighted multivariable logistic regressions were applied to assess the associations of PDI/hPDI/uPDI z-scores with periodontitis and hierarchical clusters after adjusting for potential confounders. Results: A total of 2841 (50.3%) participants were defined as having moderate/severe periodontitis. The overall PDI z-score was not significantly associated with the clinical and bacterial markers of periodontitis. By considering the healthiness of plant foods, we observed an inverse association between hPDI z-score and periodontitis (odds ratio [OR] = 0.925, 95% confidence interval [CI]: 0.860–0.995). In contrast, a higher uPDI z-score (adherence to unhealthful plant foods) might increase the risk of periodontitis (OR = 1.100; 95% CI: 1.043–1.161). Regarding antibodies against periodontopathogens, the participants in cluster 2 had higher periodontal antibody levels than those in cluster 1. The hPDI z-score was positively associated with cluster 2 (OR = 1.192; 95% CI: 1.112–1.278). In contrast, an inverse association between uPDI z-score and cluster 2 was found (OR = 0.834; 95% CI: 0.775–0.896). Conclusions: Plant-based diets were associated with periodontitis, depending on their quality. A healthy plant-based diet was inversely related to the risk of periodontitis but positively related to elevated antibody levels against periodontopathogens. For an unhealthy plant-based diet, the opposite trends were observed.
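
    As a rough illustration of the kind of model described above, the sketch below fits a weighted logistic regression of periodontitis status on a diet-index z-score using statsmodels. The variable names (hpdi_z, age, weight) and the data are hypothetical, integer frequency weights stand in crudely for the survey weights, and the design-based variance estimation used for NHANES data is not reproduced here.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        np.random.seed(0)

        # Hypothetical analysis table: one row per participant.
        # 'periodontitis' is 0/1, 'hpdi_z' is a standardized diet index,
        # 'weight' is a stand-in for the survey sampling weight.
        df = pd.DataFrame({
            "periodontitis": np.random.binomial(1, 0.5, 500),
            "hpdi_z": np.random.normal(size=500),
            "age": np.random.uniform(40, 80, 500),
            "weight": np.random.randint(1, 5, size=500),
        })

        X = sm.add_constant(df[["hpdi_z", "age"]])   # intercept + exposure + confounder
        model = sm.GLM(df["periodontitis"], X,
                       family=sm.families.Binomial(),
                       freq_weights=df["weight"])    # crude substitute for survey weighting
        result = model.fit()

        # Odds ratio per 1-SD increase in the diet index, with 95% CI.
        or_ = np.exp(result.params["hpdi_z"])
        ci = np.exp(result.conf_int().loc["hpdi_z"])
        print(f"OR = {or_:.3f}, 95% CI = {ci[0]:.3f} to {ci[1]:.3f}")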

    Morphological Variation of the Mandible in the Orthognathic Population—A Morphological Study Using Statistical Shape Modelling

    The aim of this study was to investigate the value of 3D Statistical Shape Modelling for orthognathic surgery planning. The goal was to objectify shape variations in the orthognathic population and differences between male and female patients by means of a statistical shape modelling method. Pre-operative CBCT scans of patients for whom 3D Virtual Surgical Plans (3D VSP) were developed at the University Medical Center Groningen between 2019 and 2020 were included. Automatic segmentation algorithms were used to create 3D models of the mandibles, and the statistical shape model was built through principal component analysis. Unpaired t-tests were performed to compare the principal components of the male and female models. A total of 194 patients (130 females and 64 males) were included. The mandibular shape could be visually described by the first five principal components: (1) The height of the mandibular ramus and condyles, (2) the variation in the gonial angle of the mandible, (3) the width of the ramus and the anterior/posterior projection of the chin, (4) the lateral projection of the mandible’s angle, and (5) the lateral slope of the ramus and the inter-condylar distance. The statistical test showed significant differences between male and female mandibular shapes in 10 principal components. This study demonstrates the feasibility of using statistical shape modelling to inform physicians about mandible shape variations and relevant differences between male and female mandibles. The information obtained from this study could be used to quantify masculine and feminine mandibular shape aspects and to improve surgical planning for mandibular shape manipulations.
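
    The statistical shape modelling pipeline sketched below is a generic illustration rather than the authors' exact implementation: it assumes every mandible has already been brought into point-to-point correspondence (the same landmarks in the same order for each patient), builds a PCA model of the flattened coordinates, and compares male and female scores on each principal component with an unpaired t-test. The array shapes and the random data are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from scipy.stats import ttest_ind

        np.random.seed(0)

        # Hypothetical input: 194 mandibles with n_points corresponding landmarks each,
        # flattened to (n_patients, n_points * 3); 'is_male' flags sex per patient.
        n_patients, n_points = 194, 500
        shapes = np.random.normal(size=(n_patients, n_points * 3))
        is_male = np.zeros(n_patients, dtype=bool)
        is_male[:64] = True

        # Statistical shape model: mean shape plus principal modes of variation.
        pca = PCA(n_components=10)
        scores = pca.fit_transform(shapes)            # per-patient weight of each mode
        mean_shape = pca.mean_.reshape(n_points, 3)   # the average mandible shape

        # Unpaired t-test on each principal component: male vs. female scores.
        for k in range(scores.shape[1]):
            t, p = ttest_ind(scores[is_male, k], scores[~is_male, k])
            print(f"PC{k + 1}: explained variance = {pca.explained_variance_ratio_[k]:.3f}, "
                  f"t = {t:.2f}, p = {p:.4f}")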

    Robust and Accurate Mandible Segmentation on Dental CBCT Scans Affected by Metal Artifacts Using a Prior Shape Model

    Accurate mandible segmentation is important in the field of maxillofacial surgery to guide clinical diagnosis and treatment and to develop appropriate surgical plans. In particular, cone-beam computed tomography (CBCT) images containing metal objects, such as those acquired in oral and maxillofacial surgery (OMFS), often suffer from metal artifacts, with weak and blurred boundaries caused by high-attenuation materials and the low radiation dose used in image acquisition. To overcome this problem, this paper proposes a novel deep learning-based approach (SASeg) for automated mandible segmentation that incorporates knowledge of the overall mandible anatomy. SASeg utilizes a prior shape feature extractor (PSFE) module based on a mean mandible shape, and recurrent connections that maintain the structural continuity of the mandible. The effectiveness of the proposed network is demonstrated on a dental CBCT dataset from orthodontic treatment containing 59 patients. The experiments show that the proposed SASeg can be easily used to improve prediction accuracy on a dental CBCT dataset corrupted by metal artifacts. In addition, the experimental results on the PDDCA dataset demonstrate that, compared with state-of-the-art mandible segmentation models, our proposed SASeg achieves better segmentation performance.
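
    The abstract does not detail the PSFE module itself; the PyTorch sketch below only illustrates one simple way a mean-shape prior can be injected into a segmentation network, namely by concatenating a registered mean-mandible mask with the CBCT slice as an extra input channel. The layer sizes and names are illustrative assumptions, not the SASeg architecture.

        import torch
        import torch.nn as nn

        class ShapePriorSegNet(nn.Module):
            """Toy segmentation net that takes (image, mean-shape mask) as two channels."""

            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                )
                self.head = nn.Conv2d(32, 1, 1)   # per-pixel mandible logit

            def forward(self, image, prior_mask):
                # Concatenate the CBCT slice and the registered mean-shape mask.
                x = torch.cat([image, prior_mask], dim=1)
                return self.head(self.encoder(x))

        # Hypothetical batch: two 256x256 slices with a matching mean-shape mask.
        net = ShapePriorSegNet()
        image = torch.randn(2, 1, 256, 256)
        prior = torch.rand(2, 1, 256, 256)
        logits = net(image, prior)
        print(logits.shape)   # torch.Size([2, 1, 256, 256])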

    Automatic segmentation of the mandible from computed tomography scans for 3D virtual surgical planning using the convolutional neural network

    Segmentation of the mandibular bone in CT scans is crucial for 3D virtual surgical planning of craniofacial tumor resection and free flap reconstruction of the resection defect, in order to obtain a detailed surface representation of the bones. A major drawback of most existing mandibular segmentation methods is that they require a large amount of expert knowledge for manual or partially automatic segmentation. In fact, due to the lack of experienced doctors and experts, high-quality expert knowledge is hard to obtain in practice. Furthermore, segmentation of mandibles in CT scans is seriously affected by metal artifacts and by large variations in mandible shape and size among individuals. To address these challenges, we propose an automatic mandible segmentation approach for CT scans that takes into account the continuity of anatomical structures across different planes. The approach adopts the U-Net architecture and then combines the resulting 2D segmentations from three orthogonal planes into a 3D segmentation. We implement this segmentation approach on two head and neck datasets and then evaluate the performance. Experimental results show that our proposed approach for mandible segmentation in CT scans exhibits high accuracy.
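
    As a minimal sketch of the fusion step described above (the 2D networks themselves are omitted), the code below combines binary mandible masks predicted slice by slice in the axial, coronal, and sagittal planes into a single 3D segmentation by majority voting. The variable names and volume shape are hypothetical.

        import numpy as np

        def fuse_orthogonal_predictions(axial, coronal, sagittal):
            """Majority-vote fusion of three binary volumes of identical shape."""
            votes = (axial.astype(np.uint8) + coronal.astype(np.uint8)
                     + sagittal.astype(np.uint8))
            return votes >= 2   # a voxel is mandible if at least two planes agree

        # Hypothetical per-plane predictions, already re-stacked onto the CT volume grid.
        np.random.seed(0)
        shape = (120, 256, 256)
        axial = np.random.rand(*shape) > 0.5
        coronal = np.random.rand(*shape) > 0.5
        sagittal = np.random.rand(*shape) > 0.5

        mask_3d = fuse_orthogonal_predictions(axial, coronal, sagittal)
        print(mask_3d.shape, mask_3d.dtype)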

    Automatic Segmentation of Mandible from Conventional Methods to Deep Learning-A Review

    Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and treatment planning in OMFS. Segmented mandible structures are used to effectively visualize the mandible volumes and to evaluate particular mandible properties quantitatively. However, mandible segmentation is always challenging for both clinicians and researchers, due to complex structures and high-attenuation materials, such as tooth fillings or metal implants, that easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary to a large extent between individuals. Therefore, mandible segmentation is a tedious and time-consuming task that requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms to automatically segment the mandible during the last two decades. The objective of this review was to present the available fully and semi-automatic segmentation methods of the mandible published in different scientific articles. This review provides clinicians and researchers in this field with a clear description of these scientific advancements, to help develop novel automatic methods for clinical applications.

    Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography

    PURPOSE: Classic encoder-decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment detailed anatomical structures of the mandible in computed tomography (CT), for instance, the condyles and coronoids of the mandible, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that is able to accurately segment these detailed anatomical structures. METHODS: Unlike classic EDCNNs, which need to slice or crop the whole CT scan into 2D slices or 3D patches during the segmentation process, our proposed approach can perform mandible segmentation on complete 3D CT scans. The proposed method, named RCNNSeg, adopts the structure of recurrent neural networks to form a directed acyclic graph, enabling recurrent connections between adjacent nodes to retain their connectivity. Each node then functions as a classic EDCNN that segments a single slice in the CT scan. Our proposed approach can perform 3D mandible segmentation on sequential data of varying lengths and does not incur a large computational cost. The proposed RCNNSeg was evaluated on 109 head and neck CT scans from a local dataset and 40 scans from the public PDDCA dataset. The final accuracy of the proposed RCNNSeg was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation. RESULTS: The proposed RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performance compared to the state-of-the-art approaches on the PDDCA dataset. The proposed RCNNSeg generated the most accurate segmentations, with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on the 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset. CONCLUSIONS: The proposed RCNNSeg method generated more accurate automated segmentations than the other classic EDCNN segmentation techniques in terms of quantitative and qualitative evaluation. The proposed RCNNSeg has potential for automatic mandible segmentation by learning spatially structured information.
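
    For reference, the Dice similarity coefficient reported above is defined as DSC = 2|A ∩ B| / (|A| + |B|) for a reference mask A and an automated mask B. A minimal NumPy implementation is sketched below on hypothetical masks; the surface-distance metrics ASD and 95HD are not reproduced here.

        import numpy as np

        def dice_coefficient(reference, prediction):
            """DSC between two binary volumes: 2*|A∩B| / (|A| + |B|)."""
            reference = reference.astype(bool)
            prediction = prediction.astype(bool)
            intersection = np.logical_and(reference, prediction).sum()
            denominator = reference.sum() + prediction.sum()
            if denominator == 0:      # both masks empty: define DSC as 1
                return 1.0
            return 2.0 * intersection / denominator

        # Hypothetical example: two random binary masks.
        np.random.seed(0)
        ref = np.random.rand(64, 64, 64) > 0.5
        pred = np.random.rand(64, 64, 64) > 0.5
        print(f"DSC = {dice_coefficient(ref, pred):.4f}")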

    CKD-TransBTS: Clinical Knowledge-Driven Hybrid Transformer with Modality-Correlated Cross-Attention for Brain Tumor Segmentation

    Brain tumor segmentation (BTS) in magnetic resonance imaging (MRI) is crucial for brain tumor diagnosis, cancer management, and research purposes. With the great success of the ten-year BraTS challenges as well as the advances of CNN and Transformer algorithms, many outstanding BTS models have been proposed to tackle the difficulties of BTS in different technical aspects. However, existing studies hardly consider how to fuse the multi-modality images in a reasonable manner. In this paper, we leverage the clinical knowledge of how radiologists diagnose brain tumors from multiple MRI modalities and propose a clinical knowledge-driven brain tumor segmentation model, called CKD-TransBTS. Instead of directly concatenating all the modalities, we re-organize the input modalities by separating them into two groups according to the imaging principle of MRI. A dual-branch hybrid encoder with the proposed modality-correlated cross-attention block (MCCA) is designed to extract the multi-modality image features. The proposed model inherits the strengths of both the Transformer and the CNN: local feature representation for precise lesion boundaries and long-range feature extraction for 3D volumetric images. To bridge the gap between Transformer and CNN features, we propose a Trans&CNN Feature Calibration block (TCFC) in the decoder. We compare the proposed model with five CNN-based models and six Transformer-based models on the BraTS 2021 challenge dataset. Extensive experiments demonstrate that the proposed model achieves state-of-the-art brain tumor segmentation performance compared with all the competitors.
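
    The abstract does not give the internals of the MCCA block; the PyTorch sketch below only illustrates the general idea of cross-attention between two groups of MRI modality features, with queries taken from one group and keys/values from the other. The grouping, token shapes, and dimensions are illustrative assumptions, not the CKD-TransBTS design.

        import torch
        import torch.nn as nn

        class ModalityCrossAttention(nn.Module):
            """Toy cross-attention: features of group A attend to features of group B."""

            def __init__(self, dim=64, heads=4):
                super().__init__()
                self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads,
                                                  batch_first=True)
                self.norm = nn.LayerNorm(dim)

            def forward(self, feats_a, feats_b):
                # Queries come from group A, keys and values from group B.
                attended, _ = self.attn(query=feats_a, key=feats_b, value=feats_b)
                return self.norm(feats_a + attended)   # residual connection

        # Hypothetical token sequences (batch, tokens, dim) for two modality groups,
        # e.g. group A = {T1, T1ce} and group B = {T2, FLAIR} after patch embedding.
        block = ModalityCrossAttention()
        a = torch.randn(2, 512, 64)
        b = torch.randn(2, 512, 64)
        out = block(a, b)
        print(out.shape)   # torch.Size([2, 512, 64])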